6 December 2024
AI, Media and Democracy Lab: Why Bother with AI Transparency?
Key Concerns
Participants expressed a range of concerns about AI in journalism. One participant summed it up: “If you can’t tell what’s real or fake, it could lead to distrust in everything we read.”
The Desire for Disclosure
Despite these concerns, participants strongly favored transparency, expressing a clear desire for AI-generated content to be distinctly labeled.
Recommendations for News Organizations
From these findings, three key takeaways emerged:
- Transparency is essential: Clearly label AI-generated content to address public concerns.
- Tailored disclosures: Different audiences may need varying levels of detail about AI involvement.
- Build digital literacy: Combine transparency with education to empower readers in navigating AI content.
While transparency alone won’t solve all trust issues in journalism, it is a critical step. As one participant put it, “I don’t mind AI-written articles, but I want to know. Just be clear.”

As AI adoption continues to grow, news organizations must balance innovation with accountability, ensuring audiences remain informed and engaged.
This research reflects an important step toward understanding how transparency can rebuild trust in AI-driven journalism.

Stay connected with the AI, Media and Democracy Lab.
Similar news items

May 29
Building responsibly on foundation models: practical guide by Utrecht University of Applied Sciences and RAAIT

May 29
SER: Put people first in the implementation of AI at work

May 27
🌞 Open Space: AI meets Science Communication – will you take the stage?